    Robust and Heterogeneous Odds Ratio: Estimating Price Sensitivity for Unbought Items

    Problem definition: Mining for heterogeneous responses to an intervention is a crucial step for data-driven operations, for instance to personalize treatment or pricing. We investigate how to estimate price sensitivity from transaction-level data. In causal inference terms, we estimate heterogeneous treatment effects when (a) the response to treatment (here, whether a customer buys a product) is binary, and (b) treatment assignments are partially observed (here, full information is only available for purchased items). Methodology/Results: We propose a recursive partitioning procedure to estimate heterogeneous odds ratios, a widely used measure of treatment effect in medicine and the social sciences. We integrate an adversarial imputation step to allow for robust estimation even in the presence of partially observed treatment assignments. We validate our methodology on synthetic data and apply it to three case studies from political science, medicine, and revenue management. Managerial Implications: Our robust heterogeneous odds ratio estimation method is a simple and intuitive tool to quantify heterogeneity in patients or customers and personalize interventions, while lifting a central limitation of many revenue management datasets.
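
    For reference, the estimand can be written in one line. A conditional odds ratio, in notation that is ours rather than the paper's (binary treatment T, binary purchase response Y, covariates x):

    ```latex
    % Conditional odds ratio of purchase under treatment vs. control; notation illustrative.
    \[
      \mathrm{OR}(x) \;=\;
      \frac{\Pr(Y=1 \mid T=1, x) \,/\, \Pr(Y=0 \mid T=1, x)}
           {\Pr(Y=1 \mid T=0, x) \,/\, \Pr(Y=0 \mid T=0, x)} .
    \]
    % Heterogeneity means OR(x) varies with x; recursive partitioning estimates a
    % piecewise-constant OR over a learned partition of the covariate space.
    ```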

    Hospital-Wide Inpatient Flow Optimization

    An ideal that supports quality and delivery of care is to have hospital operations that are coordinated and optimized across all services in real time. As a step toward this goal, we propose a multistage adaptive robust optimization approach combined with machine learning techniques. Informed by data and predictions, our framework unifies the bed assignment process across the entire hospital and accounts for present and future inpatient flows, discharges, and bed requests from the emergency department, scheduled surgeries and admissions, and outside transfers. We evaluate our approach through simulations calibrated on historical data from a large academic medical center. For the 600-bed institution, our optimization model was solved in seconds, reduced off-service placement by 24% on average, and reduced boarding delays in the emergency department and post-anesthesia units by 35% and 18%, respectively. We also illustrate the benefit of using adaptive linear decision rules instead of static assignment decisions.
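
    The last point can be made concrete in one line. In a linear (affine) decision rule, later-stage decisions depend affinely on the uncertainty observed so far rather than being fixed upfront; the notation below is illustrative, not the paper's:

    ```latex
    % Static decision: x_t is committed before demand/discharge uncertainty is revealed.
    % Affine decision rule: x_t responds to the realizations \xi_1, ..., \xi_{t-1}
    % observed so far; the coefficients x_t^0 and X_{t,s} are what the model optimizes.
    \[
      x_t(\xi) \;=\; x_t^0 + \sum_{s<t} X_{t,s}\, \xi_s .
    \]
    ```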

    A unified approach to mixed-integer optimization with logical constraints

    We propose a unified framework to address a family of classical mixed-integer optimization problems, including sparse principal component analysis and sparse learning problems. These problems exhibit logical relationships between continuous and discrete variables, which are usually reformulated linearly using a big-M formulation. In this work, we challenge this longstanding modeling practice and express the logical constraints in a non-linear way. By imposing a regularization condition, we reformulate these problems as convex binary optimization problems, which are solvable using an outer-approximation procedure. In numerical experiments, we establish that a general-purpose numerical strategy, which combines cutting-plane, first-order, and local search methods, solves these problems faster and at a larger scale than state-of-the-art mixed-integer linear or second-order cone methods. Our approach successfully solves network design problems with 100s of nodes and provides solutions up to 40% better than the state of the art; sparse portfolio selection problems with up to 3,200 securities, compared with 400 securities for previous attempts; and sparse regression problems with up to 100,000 covariates.
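
    To illustrate the modeling practice being challenged, here is a textbook big-M encoding of a logical constraint next to a perspective-style nonlinear alternative; this is a generic illustration in our notation, not the paper's exact formulation:

    ```latex
    % Logical constraint: continuous x may be nonzero only if the binary z equals 1.
    % Big-M (linear) encoding, valid only if a bound |x| <= M is known:
    \[
      -Mz \;\le\; x \;\le\; Mz, \qquad z \in \{0, 1\}.
    \]
    % Nonlinear alternative under ridge regularization: replace x^2 in the objective
    % with its perspective x^2/z (with 0/0 := 0), which forces x = 0 whenever z = 0
    % and yields a convex binary problem amenable to outer approximation.
    ```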

    Predicting inpatient flow at a major hospital using interpretable analytics

    Problem definition: Turn raw data from Electronic Health Records into accurate predictions on patient flows and inform daily decision-making at a major hospital. Practical Relevance: In a hospital environment under increasing financial and operational stress, forecasts of patient demand patterns could help match capacity and demand and improve hospital operations. Methodology: We use data from 63,432 admissions at a large academic hospital (50.0% female, median age 64 years, median length-of-stay 3.12 days). We construct an expertise-driven patient representation on top of their EHR data and apply a broad class of machine learning methods to predict several aspects of patient flows. Results: With a unique patient representation, we estimate short-term discharges, identify long-stay patients, predict discharge destination, and anticipate flows in and out of intensive care units with accuracy in the 80%+ range. More importantly, we implement this machine learning pipeline into the EHR system of the hospital and construct prediction-informed dashboards to support daily bed placement decisions. Managerial Implications: Our study demonstrates that interpretable machine learning techniques combined with EHR data can provide visibility on patient flows. Our approach provides an alternative to deep learning techniques that is equally accurate, interpretable, frugal in data and computational power, and production-ready.
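
    A minimal sketch of the kind of pipeline described, assuming a tabular patient-day representation; the file name, feature names, gradient-boosting model, and 48-hour discharge target are all illustrative assumptions, not the paper's specification:

    ```python
    # Hypothetical sketch: predict near-term discharge from an expertise-driven
    # patient representation stored one row per patient-day.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("ehr_snapshots.csv")  # hypothetical extract of the EHR data
    features = ["age", "days_since_admission", "num_active_orders",
                "on_iv_medication", "pending_consults"]  # illustrative features
    X_train, X_test, y_train, y_test = train_test_split(
        df[features], df["discharged_within_48h"], test_size=0.2, random_state=0)

    model = GradientBoostingClassifier().fit(X_train, y_train)
    print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
    ```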

    Mixed-Projection Conic Optimization: A New Paradigm for Modeling Rank Constraints

    We propose a framework for modeling and solving low-rank optimization problems to certifiable optimality. We introduce symmetric projection matrices that satisfy Y² = Y, the matrix analog of binary variables that satisfy z² = z, to model rank constraints. By leveraging regularization and strong duality, we prove that this modeling paradigm yields tractable convex optimization problems over the non-convex set of orthogonal projection matrices. Furthermore, we design outer-approximation algorithms to solve low-rank problems to certifiable optimality, compute lower bounds via their semidefinite relaxations, and provide near-optimal solutions through rounding and local search techniques. We implement these numerical ingredients and, for the first time, solve low-rank optimization problems to certifiable optimality. Our algorithms also supply certifiably near-optimal solutions for larger problem sizes and outperform existing heuristics, by deriving an alternative to the popular nuclear norm relaxation that generalizes the perspective relaxation from vectors to matrices. Using currently available spatial branch-and-bound codes, not tailored to projection matrices, we can scale our exact (resp. near-exact) algorithms to matrices with up to 30 (resp. 600) rows/columns. All in all, our framework, which we name Mixed-Projection Conic Optimization, solves low-rank problems to certifiable optimality in a tractable and unified fashion.
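
    The analogy between binary variables and projection matrices admits a one-line statement of how rank constraints are modeled; the notation below is standard but ours:

    ```latex
    % z^2 = z (z binary) models sparsity: x_i = 0 unless z_i = 1.
    % Y^2 = Y with Y symmetric (an orthogonal projection) models rank:
    \[
      \operatorname{rank}(X) \le k
      \;\Longleftrightarrow\;
      \exists\, Y = Y^\top,\; Y^2 = Y,\; \operatorname{tr}(Y) \le k,\; YX = X .
    \]
    % Y projects onto a subspace of dimension tr(Y) that contains the columns of X.
    ```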

    Solving Large-Scale Sparse PCA to Certifiable (Near) Optimality

    Sparse principal component analysis (PCA) is a popular dimensionality reduction technique for obtaining principal components which are linear combinations of a small subset of the original features. Existing approaches cannot supply certifiably optimal principal components with more than *p* = 100s of variables. By reformulating sparse PCA as a convex mixed-integer semidefinite optimization problem, we design a cutting-plane method which solves the problem to certifiable optimality at the scale of selecting *k* = 5 covariates from *p* = 300 variables, and provides small bound gaps at a larger scale. We also propose a convex relaxation and greedy rounding scheme that provides bound gaps of 1–2% in practice within minutes for *p* = 100s or hours for *p* = 1,000s, and is therefore a viable alternative to the exact method at scale. Using real-world financial and medical data sets, we illustrate our approach's ability to derive interpretable principal components tractably at scale.
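
    For concreteness, the optimization problem being solved to certifiable optimality is the following (our notation, with Σ the sample covariance matrix):

    ```latex
    % Sparse PCA: the leading principal component with at most k nonzero loadings.
    % The cardinality constraint ||x||_0 <= k is what makes the problem NP-hard.
    \[
      \max_{x \in \mathbb{R}^p} \; x^\top \Sigma\, x
      \quad \text{s.t.} \quad \|x\|_2 = 1, \;\; \|x\|_0 \le k .
    \]
    ```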

    A new perspective on low-rank optimization

    A key question in many low-rank problems throughout optimization, machine learning, and statistics is to characterize the convex hulls of simple low-rank sets and judiciously apply these convex hulls to obtain strong yet computationally tractable relaxations. We invoke the matrix perspective function (the matrix analog of the perspective function) to characterize explicitly the convex hull of epigraphs of simple matrix convex functions under low-rank constraints. Further, we combine the matrix perspective function with orthogonal projection matrices (the matrix analog of binary variables, which capture the row-space of a matrix) to develop a matrix perspective reformulation technique that reliably obtains strong relaxations for a variety of low-rank problems, including reduced rank regression, non-negative matrix factorization, and factor analysis. Moreover, we establish that these relaxations can be modeled via semidefinite constraints and thus optimized over tractably. The proposed approach parallels and generalizes the perspective reformulation technique in mixed-integer optimization and leads to new relaxations for a broad class of problems.
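
    For readers unfamiliar with the object: the scalar perspective function, whose matrix analog drives the reformulation technique above (our notation):

    ```latex
    % Perspective of a convex function f: jointly convex in (x, t) on t > 0.
    \[
      g(x, t) \;=\; t\, f(x / t), \qquad t > 0 .
    \]
    % The matrix perspective replaces t with an orthogonal projection matrix Y
    % (capturing the row-space of the decision matrix), informally speaking.
    ```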

    Robust convex optimization: A new perspective that unifies and extends

    Robust convex constraints are difficult to handle, since finding the worst-case scenario is equivalent to maximizing a convex function. In this paper, we propose a new approach to deal with such constraints that unifies most approaches known in the literature and extends them in a significant way. The extension either yields better solutions than the ones proposed in the literature, or yields solutions for classes of problems unaddressed by previous approaches. Our solution is based on an extension of the Reformulation-Linearization Technique, and can be applied to general convex inequalities and general convex uncertainty sets. It generates a sequence of conservative approximations which can be used to obtain both upper and lower bounds on the optimal objective value. We illustrate the numerical benefit of our approach on robust control and robust geometric optimization examples.
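
    The difficulty named in the first sentence can be written in one line (our notation): a robust convex constraint requires

    ```latex
    % f is convex in the decision x and also convex in the uncertainty \zeta,
    % and U is the uncertainty set. Certifying feasibility of a given x means
    % maximizing the convex function f(x, .) over U, which is hard in general:
    \[
      f(x, \zeta) \;\le\; 0 \qquad \forall\, \zeta \in U .
    \]
    ```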

    Robust combination testing: methods and application to COVID-19 detection

    Simple and affordable testing tools are often not accurate enough to be operationally relevant. For COVID-19 detection, rapid point-of-care tests are cheap and provide results in minutes, but largely fail policymakers' accuracy requirements. We propose an analytical methodology, based on robust optimization, that identifies optimal combinations of results from cheap tests for increased predictive accuracy. This methodological tool allows policymakers to credibly quantify the benefits of combination testing and thus break the trade-off between cost and accuracy. Our methodology is robust to noisy and partially missing input data and incorporates operational constraints, considerations that are relevant in practice. We apply our methodology to two datasets containing individual-level results of multiple COVID-19 rapid antibody and antigen tests, respectively, to generate Pareto-dominating receiver operating characteristic (ROC) curves. We find that combining only three rapid tests increases out-of-sample area under the curve (AUC) by 4% (6%) compared with the best-performing individual test for antibody (antigen) detection. We also find that a policymaker who requires a specificity of at least 0.95 can improve sensitivity by 8% and 2% for antibody and antigen testing, respectively, relative to available combination testing heuristics. Our numerical analysis demonstrates that robust optimization is a powerful tool to avoid overfitting, accommodate missing data, and improve out-of-sample performance. Based on our analytical and empirical results, policymakers should consider approving and deploying a curated combination of cheap point-of-care tests in settings where 'gold standard' tests are too expensive or too slow.
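
    A stylized sketch of the idea of combination testing, on synthetic data; the logistic-regression combiner here stands in for the paper's robust optimization, and all names and numbers are invented for illustration:

    ```python
    # Stylized combination testing: learn weights over three imperfect binary tests
    # and compare the combined score's AUC with the best single test.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 1000
    disease = rng.binomial(1, 0.3, n)  # ground-truth infection status (synthetic)
    # Each test reports the true status with test-specific probability, else errs.
    tests = np.column_stack([
        np.where(rng.random(n) < acc, disease, 1 - disease)
        for acc in (0.85, 0.80, 0.75)
    ])

    combo = LogisticRegression().fit(tests, disease)  # learn combination weights
    score = combo.predict_proba(tests)[:, 1]
    best_single = max(roc_auc_score(disease, tests[:, j]) for j in range(3))
    print(f"best single-test AUC: {best_single:.3f}")
    print(f"combined-test AUC:    {roc_auc_score(disease, score):.3f}")
    ```

    In-sample evaluation keeps the sketch short; the paper's point is precisely that a robust formulation preserves such gains out of sample.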

    Minkowski Centers via Robust Optimization: Computation and Applications

    Centers of convex sets are geometric objects that have received extensive attention in the mathematical and optimization literature, both from a theoretical and a practical standpoint. For instance, they serve as initialization points for many algorithms, such as interior-point, hit-and-run, or cutting-plane methods. First, we observe that computing a Minkowski center of a convex set can be formulated as the solution of a robust optimization problem. As such, we can derive tractable formulations for computing Minkowski centers of polyhedra and convex hulls. Computationally, we illustrate that using Minkowski centers, instead of analytic or Chebyshev centers, improves the convergence of hit-and-run and cutting-plane algorithms. We also provide efficient numerical strategies for computing centers of the projection of polyhedra and of the intersection of two ellipsoids.
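
    The observation in the third sentence can be made explicit (our notation): the Minkowski center of a convex set S maximizes the symmetry coefficient α, and the constraint below is exactly a robust constraint with uncertainty set S:

    ```latex
    % Minkowski center: the point about which S is "most symmetric".
    % \alpha = 1 is attained exactly when S is centrally symmetric about x.
    \[
      \max_{x,\, \alpha} \; \alpha
      \quad \text{s.t.} \quad x + \alpha\,(x - s) \in S \quad \forall\, s \in S .
    \]
    ```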